Talk by Dan Stowell (Queen Mary, University of London), given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Abstract: In songbird vocalisation, the choice and sequencing of units (syllables) are widely studied and amenable to standard machine learning methods. However, the fine detail in the timing of those sequences is informative but often neglected. We introduce methods for making inferences about a sound scene containing multiple individuals, including a point-process method that uses timing to infer details of the communication network in a group of birds.
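To give a flavour of the timing-based idea mentioned in the abstract: if one bird tends to call shortly after another, the asymmetry in call timing alone can suggest a directed link between them. The sketch below is a toy illustration only, not the point-process model from the talk; the simulated "birds", the response window, and the score function are all invented for this example.

```python
import random

random.seed(0)

# Toy simulation (illustrative, not from the talk): bird A calls at
# random times over a 300 s recording; bird B usually replies within
# 0.05-0.5 s of each A call.
T = 300.0
a_calls = sorted(random.uniform(0, T) for _ in range(60))
b_calls = sorted(t + random.uniform(0.05, 0.5) for t in a_calls
                 if random.random() < 0.8)

def influence_score(src, dst, window=0.6):
    """Fraction of dst calls that fall within `window` seconds after
    some src call: a crude timing-based proxy for a directed link."""
    hits = sum(any(0 < d - s <= window for s in src) for d in dst)
    return hits / len(dst)

# The asymmetry between the two directions hints at who is
# responding to whom, purely from call timing.
print(f"A->B score: {influence_score(a_calls, b_calls):.2f}")
print(f"B->A score: {influence_score(b_calls, a_calls):.2f}")
```

In this toy setup the A-to-B score is high and the reverse score low, recovering the simulated direction of influence. The actual work described in the talk fits a point-process model to the call times rather than counting windowed co-occurrences, but the underlying intuition, that fine timing carries network structure, is the same.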
Dan Stowell is a researcher in machine listening: using computation to understand sound signals. He co-leads the Machine Listening Lab at Queen Mary University of London, based in the Centre for Digital Music. Dan has worked on voice, music and environmental soundscapes, and is currently leading a five-year EPSRC fellowship project researching the automatic analysis of bird sounds.